
    BigEAR: Inferring the Ambient and Emotional Correlates from Smartphone-based Acoustic Big Data

    This paper presents BigEAR, a novel big data framework that employs a psychological audio processing chain (PAPC) to process smartphone-based acoustic big data collected while the user engages in social conversations in naturalistic scenarios. The overarching goal of BigEAR is to identify the wearer's moods from activities such as laughing, singing, crying, arguing, and sighing. These annotations are based on ground truth relevant to psychologists who intend to monitor or infer the social context of individuals coping with breast cancer. We pursued a case study on couples coping with breast cancer to learn how their conversations affect emotional and social well-being. In current practice, psychologists and their teams have to listen to the audio recordings and make these inferences by subjective evaluation, which is not only time-consuming and costly but also demands manual data coding for thousands of audio files. The BigEAR framework automates this audio analysis. We computed the accuracy of BigEAR with respect to ground truth obtained from a human rater. Our approach yielded an overall average accuracy of 88.76% on real-world data from couples coping with breast cancer.
    Comment: 6 pages, 10 equations, 1 table, 5 figures, IEEE International Workshop on Big Data Analytics for Smart and Connected Health 2016, June 27, 2016, Washington DC, US
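    The paper's psychological audio processing chain (PAPC) itself is not reproduced here; the following is a minimal Python sketch, assuming MFCC summary features (via librosa) and a random-forest classifier, of how per-clip mood labels could be predicted and scored against a human rater's annotations. The feature set, classifier, and helper names are illustrative, not the authors' implementation.

        # Illustrative sketch only: per-clip features and mood classification,
        # scored against human-rater labels. Not the BigEAR/PAPC implementation.
        import numpy as np
        import librosa                                  # assumed available for MFCCs
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score

        MOODS = ["laughing", "singing", "crying", "arguing", "sighing"]

        def clip_features(path, sr=16000, n_mfcc=13):
            """Summarize one clip as mean/std of MFCCs (illustrative feature set)."""
            y, _ = librosa.load(path, sr=sr, mono=True)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        def rater_agreement(train_paths, train_labels, test_paths, rater_labels):
            """Train on labeled clips, then report agreement with the human rater."""
            X_train = np.stack([clip_features(p) for p in train_paths])
            X_test = np.stack([clip_features(p) for p in test_paths])
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_train, train_labels)
            return accuracy_score(rater_labels, clf.predict(X_test))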

    FIT: A Fog Computing Device for Speech TeleTreatments

    There is an increasing demand for smart fog-computing gateways as the size of cloud data grows. This paper presents FIT, a fog computing interface for processing clinical speech data. FIT builds upon our previous work on EchoWear, a wearable technology that validated the use of smartwatches for collecting clinical speech data from patients with Parkinson's disease (PD). The fog interface is a low-power embedded system that acts as a smart interface between the smartwatch and the cloud. It collects, stores, and processes the speech data before sending speech features to secure cloud storage. We developed and validated a working prototype of FIT that enabled remote processing of clinical speech data to extract clinical speech features such as loudness, short-time energy, zero-crossing rate, and spectral centroid. We used speech data from six patients with PD, collected in their homes, to validate FIT. Our results showed the efficacy of FIT as a fog interface for translating the clinical speech processing chain (CLIP) from a cloud-based backend to a fog-based smart gateway.
    Comment: 3 pages, 5 figures, 1 table, 2nd IEEE International Conference on Smart Computing SMARTCOMP 2016, Missouri, USA, 2016
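    The frame-level features named above (short-time energy, zero-crossing rate, spectral centroid) follow standard definitions; the sketch below computes them with NumPy over sliding windows. The framing parameters and function names are assumptions for illustration, not FIT's exact firmware code.

        # Standard textbook definitions of the speech features named above.
        import numpy as np

        def short_time_energy(frame):
            return np.sum(frame.astype(float) ** 2)

        def zero_crossing_rate(frame):
            signs = np.sign(frame)
            signs[signs == 0] = 1
            return np.mean(signs[1:] != signs[:-1])

        def spectral_centroid(frame, fs):
            spectrum = np.abs(np.fft.rfft(frame))
            freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
            return np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

        def frame_features(signal, fs, frame_len=0.025, hop=0.010):
            """Slide a window over the signal and compute features per frame."""
            n, step = int(frame_len * fs), int(hop * fs)
            feats = []
            for start in range(0, len(signal) - n + 1, step):
                frame = signal[start:start + n]
                feats.append((short_time_energy(frame),
                              zero_crossing_rate(frame),
                              spectral_centroid(frame, fs)))
            return np.array(feats)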

    RESPIRE: A Spectral Kurtosis-Based Method to Extract Respiration Rate from Wearable PPG Signals

    In this paper, we present the design of a wearable photoplethysmography (PPG) system, R-Band, for acquiring PPG signals. PPG signals are influenced by the respiration (breathing) process and hence can be used to estimate respiration rate. R-Band detects the PPG signal, which is routed via a microprocessor to a Bluetooth Low Energy device such as a nearby smartphone. Further, we developed an algorithm based on Extreme Learning Machine (ELM) regression for the estimation of respiration rate. We propose spectral kurtosis features that are fused with state-of-the-art features based on respiratory-induced amplitude, intensity, and frequency variations to estimate respiration rate (in breaths per minute). In contrast to a neural network (NN), ELM does not require tuning of the hidden-layer parameters and thus drastically reduces the computational cost compared to an NN trained by the standard backpropagation algorithm. We evaluated the proposed algorithm on the publicly available CapnoBase dataset.
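    An ELM regressor is simple enough to show compactly: the hidden-layer weights are drawn at random and only the output weights are solved in closed form, which is why no backpropagation-style tuning is needed. The sketch below is a generic ELM, assuming a sigmoid activation and illustrative hyperparameters; the feature matrix would hold the spectral-kurtosis and respiratory-variation features per window, and the targets the reference respiration rate in breaths per minute.

        # Generic Extreme Learning Machine regression (illustrative, not the
        # paper's exact configuration): random hidden layer, closed-form output.
        import numpy as np

        class ELMRegressor:
            def __init__(self, n_hidden=100, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def _hidden(self, X):
                return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid layer

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = self._hidden(X)
                self.beta = np.linalg.pinv(H) @ y          # least-squares output weights
                return self

            def predict(self, X):
                return self._hidden(X) @ self.beta

        # Usage sketch: X_train holds per-window features, y_train the reference
        # respiration rate in breaths per minute.
        # model = ELMRegressor(n_hidden=150).fit(X_train, y_train)
        # rr_estimates = model.predict(X_test)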

    Harmonic Sum-based Method for Heart Rate Estimation using PPG Signals Affected with Motion Artifacts

    Wearable photoplethysmography (PPG) has recently become a common technology in heart rate (HR) monitoring. A general observation is that motion artifacts change the statistics of the acquired PPG signal. Consequently, estimating HR from such a corrupted PPG signal is challenging. However, if an accelerometer is used to acquire the acceleration signal simultaneously, it can provide helpful information for reducing the motion artifacts in the PPG signal. Owing to the repetitive movements of the subject's hands while running, the accelerometer signal is found to be quasi-periodic. Over short time intervals, it can be modeled by a finite harmonic sum (HSUM). Using the HSUM model, we obtain an estimate of the instantaneous fundamental frequency of the accelerometer signal. Since the PPG signal is a composite of the heart-rate information (which is also quasi-periodic) and the motion artifact, we fit a joint HSUM model to the PPG signal: one harmonic sum corresponds to the heart-beat component and the other models the motion artifact, whose fundamental frequency has already been determined from the accelerometer signal. Subsequently, the HR is estimated from the joint HSUM model. The mean absolute error in HR estimates was 0.7359 beats per minute (BPM) with a standard deviation of 0.8328 BPM on the 2015 IEEE Signal Processing Cup data. The ground-truth HR was obtained from simultaneously acquired ECG to validate the accuracy of the proposed method. The proposed method is compared with four recently developed methods evaluated on the same dataset.
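    A minimal sketch of a harmonic-sum (HSUM) fit over a short window: for each candidate fundamental frequency, the harmonic amplitudes are solved by least squares and the candidate with the smallest residual is kept. The grid search, number of harmonics, and function names are illustrative assumptions rather than the paper's exact estimator.

        # Estimate the instantaneous fundamental frequency of a quasi-periodic
        # window (e.g. the accelerometer signal) by fitting a finite harmonic sum.
        import numpy as np

        def harmonic_design(t, f0, n_harmonics):
            """Columns of cos/sin terms at the first n_harmonics multiples of f0."""
            cols = [np.ones_like(t)]
            for k in range(1, n_harmonics + 1):
                cols.append(np.cos(2 * np.pi * k * f0 * t))
                cols.append(np.sin(2 * np.pi * k * f0 * t))
            return np.column_stack(cols)

        def estimate_f0(x, fs, f0_grid, n_harmonics=3):
            """Return the fundamental frequency whose harmonic sum best fits x."""
            t = np.arange(len(x)) / fs
            best_f0, best_residual = None, np.inf
            for f0 in f0_grid:
                A = harmonic_design(t, f0, n_harmonics)
                coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
                residual = np.sum((x - A @ coeffs) ** 2)
                if residual < best_residual:
                    best_f0, best_residual = f0, residual
            return best_f0

        # e.g. cadence from a wrist accelerometer window:
        # f0 = estimate_f0(acc_window, fs=125, f0_grid=np.arange(0.5, 3.5, 0.01))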

    WearLight: Towards a Wearable, Configurable Functional NIR Spectroscopy System for Noninvasive Neuroimaging

    Functional near-infrared spectroscopy (fNIRS) has emerged as an effective brain-monitoring technique for measuring the hemodynamic response of the cortical surface. Its wide popularity and recent adoption are attributable to its portability, ease of use, and flexibility in multimodal studies involving electroencephalography. While fNIRS is still emerging on various fronts, including hardware, software, algorithms, and applications, it must overcome several scientific challenges associated with brain monitoring in naturalistic environments where human participants are allowed to move and are required to perform various tasks stimulating brain behavior. In response to these challenges and demands, we have developed WearLight, a wearable fNIRS system built upon an Internet-of-Things embedded architecture for onboard intelligence, configurability, and data transmission. In addition, we have carried out detailed research and comparative analysis on the design of the optodes, which encapsulate a near-infrared light source and a detector in 3-D printed material. We performed rigorous experimental studies on human participants to test reliability, signal-to-noise ratio, and configurability. Most importantly, we observed that WearLight can measure hemodynamic responses in various setups, including arterial occlusion on the forearm and frontal-lobe brain activity during breathing exercises in a naturalistic environment. Our promising experimental results provide preliminary clinical validation of WearLight and encourage us to move toward intensive brain-monitoring studies.
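    The abstract does not detail WearLight's signal processing, but a standard step in turning two-wavelength fNIRS intensities into oxy-/deoxy-hemoglobin changes is the modified Beer-Lambert law; a generic sketch follows. The extinction coefficients and differential pathlength factors must come from published tables and are left as caller-supplied parameters here; nothing in this snippet is specific to WearLight's pipeline.

        # Generic modified Beer-Lambert law conversion (illustrative).
        import numpy as np

        def hemoglobin_changes(intensity, baseline, ext_coeffs, dpf, distance_cm):
            """
            intensity:   (n_samples, 2) detected light at the two wavelengths
            baseline:    (2,) baseline intensity per wavelength
            ext_coeffs:  (2, 2) extinction coefficients [wavelength x (HbO, HbR)]
            dpf:         (2,) differential pathlength factor per wavelength
            distance_cm: source-detector separation
            Returns (n_samples, 2) of [delta HbO, delta HbR].
            """
            delta_od = -np.log10(intensity / baseline)      # optical density change
            pathlength = dpf * distance_cm                  # effective path per wavelength
            A = ext_coeffs * pathlength[:, None]            # scaled extinction matrix
            return np.linalg.solve(A, delta_od.T).T         # invert per sample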

    Hand Motion Detection in fNIRS Neuroimaging Data

    As the number of people diagnosed with movement disorders increases, it becomes vital to design techniques that allow a better understanding of the human brain in naturalistic settings. There are many brain imaging methods, such as fMRI, SPECT, and MEG, that provide functional information about the brain. However, these techniques have limitations including immobility, cost, and motion artifacts. One of the emerging portable brain scanners available today is functional near-infrared spectroscopy (fNIRS). In this study, we conducted fNIRS neuroimaging of seven healthy subjects while they performed wrist tasks, such as flipping their hand, interleaved with periods of rest (no movement). Different support vector machine (SVM) models were applied to these fNIRS neuroimaging data, and the results show that the action and rest periods could be classified with an accuracy of over 80% on the fNIRS data of individual participants. Our results are promising and suggest that the presented classification method for fNIRS could be applied to real-time applications such as brain-computer interfacing (BCI). In future steps of this research, we plan to record brain activity from fNIRS and EEG and fuse them with body motion sensors to correlate the activities.
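    A minimal per-participant sketch of the action-versus-rest classification described above: sliding-window summaries of the fNIRS channels as features, an SVM classifier, and cross-validated accuracy via scikit-learn. The RBF kernel, window statistics, and cross-validation scheme are assumptions for illustration, not the study's exact configuration.

        # Action vs. rest classification of windowed fNIRS data with an SVM.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def window_features(hbo, window=50, step=25):
            """Mean and net change per channel over sliding windows of the HbO signal."""
            feats = []
            for start in range(0, hbo.shape[0] - window + 1, step):
                seg = hbo[start:start + window]           # (window, n_channels)
                feats.append(np.concatenate([seg.mean(axis=0), seg[-1] - seg[0]]))
            return np.array(feats)

        def action_vs_rest_accuracy(hbo, labels_per_window):
            """labels_per_window must align with the windows produced above."""
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
            scores = cross_val_score(clf, window_features(hbo), labels_per_window, cv=5)
            return scores.mean()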

    Visualization of Multidimensional Clinical Data from Wearables on the Web and on Apps

    Health-related IoT is an important field that is changing health care by recording the large volumes of patient data needed to identify patterns. This big data requires effective visualization to be useful to clinicians. We present the design architecture of a new visualization tool, based on web and mobile platforms, for use in the clinical setting. We discuss the implementation of four diagram views of chronological patient data, including continuous calendar views and single-day views rendered as pie charts.
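    As a rough illustration of the "day as a pie chart" view (not the tool's actual web or app code), one recorded day can be bucketed by activity and the time spent in each bucket rendered as a pie slice, e.g. with matplotlib; the bucket names below are hypothetical.

        # Illustrative single-day pie-chart view of wearable data.
        from collections import Counter
        import matplotlib.pyplot as plt

        def day_as_pie(events):
            """events: list of (label, duration_minutes) tuples for one day."""
            totals = Counter()
            for label, minutes in events:
                totals[label] += minutes
            labels, sizes = zip(*sorted(totals.items()))
            plt.pie(sizes, labels=labels, autopct="%1.0f%%", startangle=90)
            plt.title("One day of wearable data by activity")
            plt.show()

        # day_as_pie([("rest", 480), ("walking", 120), ("exercise", 45), ("other", 795)])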

    Smart fog: Fog computing framework for unsupervised clustering analytics in wearable Internet of Things

    The increasing use of wearables in smart telehealth systems has led to the generation of large volumes of medical big data. Cloud and fog services leverage these data to assist clinical procedures, and IoT healthcare has benefited from this large pool of generated data. This paper proposes the use of low-resource machine learning on fog devices kept close to the wearables for smart telehealth. In traditional telecare systems, the signal-processing and machine-learning modules are deployed in the cloud, which processes the physiological data. This paper presents a fog architecture that relies on unsupervised machine learning to discover patterns in physiological big data. We developed a prototype using Intel Edison and Raspberry Pi that was tested on real-world pathological speech data from telemonitoring of patients with Parkinson's disease (PD). The proposed architecture employs machine learning to analyze pathological speech data obtained from smartwatches worn by patients with PD. Results show that the proposed architecture is promising for low-resource machine learning and could be useful for other wearable-IoT applications in smart telehealth by translating machine-learning approaches from the cloud backend to edge devices such as fog gateways.
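    As a sketch of the kind of low-resource, unsupervised analysis that could run on such a fog gateway, the snippet below clusters per-recording speech features and keeps only compact summaries to send to the cloud. K-means and the example feature names are illustrative choices, not necessarily the paper's algorithm.

        # Unsupervised clustering of speech features on the gateway (illustrative).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        def cluster_on_gateway(feature_matrix, n_clusters=3, seed=0):
            """feature_matrix: (n_recordings, n_features), e.g. loudness, jitter, shimmer."""
            X = StandardScaler().fit_transform(feature_matrix)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
            # Only compact summaries (labels and centroids) need to leave the gateway.
            return km.labels_, km.cluster_centers_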